Search Results
Search for: All records
Total Resources: 4
Filter by Author / Creator:
- Kurlandski, Luke (4)
- Pan, Yin (3)
- Wright, Matthew (3)
- Berger, Harel (2)
- Abourahma, Heba (1)
- Leven, Maximilian (1)
- McGee, David_J (1)
- Mosli, Rayan (1)
- Soelen, Matthew_Van (1)
- Stolz, Daniel (1)
- Strobelt, Jonas (1)
- Thianphan, Sirapat (1)
Malware poses an increasing threat to critical computing infrastructure, driving demand for more advanced detection and analysis methods. Although raw-binary malware classifiers show promise, they are limited in their capabilities and struggle with the challenges of modeling long sequences. Meanwhile, the rise of large language models (LLMs) in natural language processing showcases the power of massive, self-supervised models trained on heterogeneous datasets, offering flexible representations for numerous downstream tasks. The success behind these models is rooted in the size and quality of their training data, the expressiveness and scalability of their neural architecture, and their ability to learn from unlabeled data in a self-supervised manner. In this work, we take the first steps toward developing large malware language models (LMLMs), the malware analog to LLMs. We tackle the core aspects of this objective, namely, questions about data, models, pretraining, and finetuning. By pretraining a malware classification model with language modeling objectives, we were able to improve downstream performance on diverse practical malware classification tasks by 1.1% on average and up to 28.6%, indicating that these models could serve as successors to raw-binary malware classifiers.
Free, publicly-accessible full text available February 24, 2027.
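The pretrain-then-finetune recipe the abstract describes can be illustrated with a deliberately tiny sketch: a next-byte bigram "language model" fit on unlabeled byte sequences stands in for self-supervised pretraining, and the sequence's average log-likelihood under that model stands in for a learned representation. All function names and the toy data are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Toy "pretraining": fit a next-byte (bigram) language model on
# unlabeled byte sequences -- the self-supervised language modeling
# objective from the abstract, shrunk down to counting statistics.
def pretrain_bigram(sequences, vocab=256):
    counts = np.ones((vocab, vocab))  # Laplace smoothing
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Downstream stand-in: represent a binary by its average next-byte
# log-likelihood under the pretrained model; a classifier head would
# be finetuned on top of such features in the real setup.
def sequence_score(model, seq):
    return float(np.mean([np.log(model[a, b]) for a, b in zip(seq, seq[1:])]))

unlabeled = [bytes([i % 7 for i in range(64)]), bytes(range(32))]
model = pretrain_bigram(unlabeled)
score = sequence_score(model, bytes([1, 2, 3, 4]))
```

A real LMLM would replace the bigram table with a large transformer and the score with learned hidden states, but the division of labor (self-supervised pretraining on unlabeled bytes, then supervised finetuning) is the same.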
-
Kurlandski, Luke; Mosli, Rayan; Pan, Yin; Thianphan, Sirapat; Wright, Matthew (IEEE).
Free, publicly-accessible full text available June 30, 2026.
-
Strobelt, Jonas; Stolz, Daniel; Leven, Maximilian; Soelen, Matthew_Van; Kurlandski, Luke; Abourahma, Heba; McGee, David_J (Optics Express). A versatile system for the fabrication of surface microstructures is demonstrated by combining the photomechanical response of supramolecular azopolymers with structured polarized illumination from a high-resolution spatial light modulator. Surface relief structures with periods of 900 nm to 16.5 µm and amplitudes up to 1.0 µm can be fabricated with a single 5 s exposure at 488 nm. Sinusoidal, circular, and chirped surface profiles can be fabricated by directly programming the spatial light modulator, with no optomechanical realignment required. Surface microstructures can be combined into macroscopic areas by mechanical translation followed by exposure. The surface structures grow immediately in response to illumination, can be observed visually in real time, and require no post-exposure processing.
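The direct-programming workflow above (sending sinusoidal, circular, or chirped profiles to the spatial light modulator with no realignment) amounts to generating grayscale phase patterns on a pixel grid. A minimal NumPy sketch follows; the grid size, pixel periods, and function names are illustrative assumptions, and mapping pixel periods to the 900 nm to 16.5 µm surface periods would depend on the optical magnification of the actual setup.

```python
import numpy as np

# Generate 8-bit grayscale patterns suitable for display on an SLM.
# Periods here are in SLM pixels, not in the final surface-relief units.
def sinusoidal(shape, period_px):
    x = np.arange(shape[1])
    row = 0.5 * (1 + np.sin(2 * np.pi * x / period_px))
    return np.tile((255 * row).astype(np.uint8), (shape[0], 1))

def circular(shape, period_px):
    # Concentric rings: sinusoid in radial distance from the center.
    y, x = np.indices(shape)
    r = np.hypot(x - shape[1] / 2, y - shape[0] / 2)
    return (255 * 0.5 * (1 + np.sin(2 * np.pi * r / period_px))).astype(np.uint8)

def chirped(shape, p_start_px, p_end_px):
    # Period varies linearly across the grid; integrate the local
    # spatial frequency to get a smoothly chirped phase.
    period = np.linspace(p_start_px, p_end_px, shape[1])
    phase = 2 * np.pi * np.cumsum(1.0 / period)
    row = 0.5 * (1 + np.sin(phase))
    return np.tile((255 * row).astype(np.uint8), (shape[0], 1))

pattern = chirped((480, 640), 20.0, 80.0)
```

Switching from one profile to another is then just a matter of uploading a different array, which is what makes the no-realignment workflow possible.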